Resolving Artificial Intelligence's Trust Problem
AI needs a balance between speed and fairness. Is artificial intelligence getting a bad rap? Only half (50%) of consumers trust companies that use AI as much as they trust other companies, according to a survey of 19,504 people published by the World Economic Forum. At the same time, the more familiar people are with AI, the greater their level of trust. The likelihood of trusting companies that use AI as much as other companies is highest among business decision-makers (62%) and business owners (61%), the WEF survey shows.
Trust in AI: Interpretability is not necessary or sufficient, while black-box interaction is necessary and sufficient
The problem of human trust in artificial intelligence is one of the most fundamental problems in applied machine learning. Our processes for evaluating AI trustworthiness have substantial ramifications for ML's impact on science, health, and humanity, yet confusion surrounds foundational concepts. What does it mean to trust an AI, and how do humans assess AI trustworthiness? What are the mechanisms for building trustworthy AI? And what is the role of interpretable ML in trust? Here, we draw from statistical learning theory and sociological lenses on human-automation trust to motivate an AI-as-tool framework, which distinguishes human-AI trust from human-AI-human trust. Evaluating an AI's contractual trustworthiness involves predicting future model behavior using behavior certificates (BCs) that aggregate behavioral evidence from diverse sources, including empirical out-of-distribution and out-of-task evaluation and theoretical proofs linking model architecture to behavior. We clarify the role of interpretability in trust with a ladder of model access. Interpretability (level 3) is neither necessary nor sufficient for trust, while the ability to run a black-box model at will (level 2) is both necessary and sufficient. While interpretability can offer benefits for trust, it can also incur costs. We clarify ways interpretability can contribute to trust, while questioning the perceived centrality of interpretability to trust in popular discourse. How can we empower people with tools to evaluate trust? Instead of trying to understand how a model works, we argue for understanding how a model behaves. Instead of opening up black boxes, we should create more behavior certificates that are more correct, relevant, and understandable. We discuss how to build trusted and trustworthy AI responsibly.
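The behavior-certificate idea above can be made concrete with a minimal sketch. All names and thresholds here are illustrative assumptions, not definitions from the paper: each certificate records one piece of behavioral evidence about a model, and a model is treated as contractually trustworthy for a given contract only if every certificate in that contract holds.

```python
from dataclasses import dataclass

@dataclass
class BehaviorCertificate:
    """One piece of behavioral evidence about a model.
    Hypothetical sketch; field names are illustrative, not from the paper."""
    claim: str            # the behavior being certified
    evidence_type: str    # e.g. "ood_eval", "out_of_task_eval", "proof"
    score: float          # measured performance (or 1.0 for a formal proof)
    threshold: float      # minimum score for the claim to hold

    def holds(self) -> bool:
        return self.score >= self.threshold

def contractual_trust(certificates: list[BehaviorCertificate]) -> bool:
    """Contractual trustworthiness: every certificate in the contract holds."""
    return all(c.holds() for c in certificates)

# Illustrative evidence from two different sources, as the abstract suggests.
certs = [
    BehaviorCertificate("accuracy on held-out OOD hospitals", "ood_eval", 0.91, 0.85),
    BehaviorCertificate("calibration on a related task", "out_of_task_eval", 0.88, 0.80),
]
print(contractual_trust(certs))  # True: both certificates meet their thresholds
```

Aggregating with `all()` reflects the conservative reading that a single failed behavioral claim voids the contract; a real evaluation pipeline might weight or group evidence differently.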
A.I. Doctors Have a Trust Problem
Imagine you're a 59-year-old man, and you go to your doctor with chest pains. The doctor thinks it might be a heart attack and orders further tests. Now, imagine you're a 59-year-old woman with the same symptoms. The doctor tells you that you're probably having a panic attack. These strikingly different suggestions, however, didn't come from a doctor, but from a popular health care app called GP at Hand, which uses artificial intelligence to tell you what might be wrong with you based on your symptoms.
How to Ensure Trust in a Digital World?
We have a trust issue. In our digital world, it has become increasingly difficult to trust each other. Whether it is another person, an organisation or a device, trust is no longer a given online. This is a serious problem for our society and our democracy: when trust erodes, the cooperation that institutions depend on breaks down.
Tools Tackle AI's Bias, Trust Problem - InformationWeek
Is your algorithm fair and unbiased? How can you be sure that the insights it offers are correct? These questions have been asked with increasing frequency in the last year. That's because when it comes to machine learning, data goes into a "black box" and insights emerge on the other side. The algorithm itself sits inside this so-called black box.